Docker

Docker has a lot of buzz around it today because it makes so many things easy that were difficult with virtual machines. Docker containers make it easy for developers, systems administrators, architects, consultants, and others to quickly test a piece of software in a container, much more quickly than with a virtual machine and using fewer resources. The average Docker command takes under a second to complete. Docker is an open-source platform for building, deploying, and managing containerized applications.

About Docker

Docker is a software platform for building applications based on containers: small, lightweight execution environments that make shared use of the operating system kernel but otherwise run in isolation from one another. While containers have been used in Linux and Unix systems for some time, Docker, an open-source project launched in 2013, helped popularize the technology by making it easier than ever for developers to package their software to “build once and run anywhere.”

Why use Docker?

Docker enables developers to easily pack, ship, and run any application as a lightweight, portable, self-sufficient container, which can run virtually anywhere. Docker containers provide a way to get a grip on software. You can use Docker to wrap up an application in such a way that its deployment and runtime issues—how to expose it on a network, how to manage its use of storage and memory and I/O, how to control access permissions—are handled outside of the application itself, and in a way that is consistent across all “containerized” apps. You can run your Docker container on any OS-compatible host (Linux or Windows) that has the Docker runtime installed.
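For example, here is a minimal sketch of how those runtime concerns are expressed as docker run flags rather than inside the application itself (the image name my-web-app and its settings are hypothetical placeholders):

# Publish container port 80 on host port 8080, cap memory, and mount a named volume,
# all without changing the application code. "my-web-app" is a hypothetical image.
$ docker run -d \
    --name web \
    -p 8080:80 \
    --memory 256m \
    -v app-data:/var/lib/app \
    my-web-app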

Importance of Docker

Docker is so popular today that “Docker” and “containers” are used interchangeably. But the first container-related technologies were available for years, even decades before Docker was released to the public in 2013.

Docker enhanced the native Linux containerization capabilities with technologies that enable:

  • Improved and seamless portability
  • Even lighter weight and more granular updates
  • Automated container creation
  • Container versioning
  • Container reuse
  • Shared container libraries

Docker tools

  • Dockerfile : Every Docker container starts with a simple text file containing instructions for how to build the Docker container image.
  • Docker images : Docker images contain executable application source code as well as all the tools, libraries, and dependencies that the application code needs to run as a container.
  • Docker containers : Docker containers are the live, running instances of Docker images. While Docker images are read-only files, containers are live, ephemeral, executable content.
  • Docker Hub : Docker Hub is the public repository of Docker images that is the “world’s largest library and community for container images.” It holds over 100,000 container images sourced from commercial software vendors, open-source projects, and individual developers.
  • Docker daemon : The Docker daemon is a service that runs on your host operating system, such as Linux, Microsoft Windows, or Apple macOS.
  • Docker registry : A Docker registry is a scalable open-source storage and distribution system for Docker images. The command sketch after this list shows these pieces in action.
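
A minimal sketch of how these tools fit together on the command line (<your_username>/demo-app is a placeholder image name):

# Build an image from the Dockerfile in the current directory.
$ docker build -t <your_username>/demo-app .

# Start a container from that image and remove it when it exits.
$ docker run --rm <your_username>/demo-app

# Push the image to a registry such as Docker Hub.
$ docker push <your_username>/demo-app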

Docker Hub

Docker Hub is a service provided by Docker for finding and sharing container images with your team. It is the world’s largest repository of container images, with an array of content sources including container community developers, open-source projects, and independent software vendors (ISVs) building and distributing their code in containers.

Users get access to free public repositories for storing and sharing images or can choose a subscription plan for private repositories.

Docker Hub provides the following major features:

  • Repositories
  • Teams & Organizations
  • Docker Official Images
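
Using an image from Docker Hub takes nothing more than a pull. For example, with the official nginx image (any official image works the same way):

# Find an image on Docker Hub, pull it, and run it locally.
$ docker search nginx
$ docker pull nginx
$ docker run --rm -d -p 8080:80 nginx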

What is Docker?

Docker is a software platform that simplifies the process of building, running, managing, and distributing applications. It does this by virtualizing the operating system of the computer on which it is installed.


Role of Docker in distributing and shipping open-source software

  1. Docker is a container management tool. It is an open-source tool for the deployment and management of containers. Docker is one of the most widely used, although it is not the only one. It is a system designed to build and execute applications or services as isolated containers.
  2. Docker is not a hardware virtualization system. When Docker was released, many people compared it to hypervisors such as VMware, KVM, and VirtualBox. Even though Docker has some points in common with hypervisors, it actually takes a totally different approach: virtual machines emulate the hardware, while containers share the host kernel.
  3. Docker uses a layered file system. Tools like Docker provide a deployment model based on images, which makes it easy to share an application or service among different environments. Each image is layered, and any time you edit the image a new layer is created (see the sketch after this list).
  4. Docker saves you time. Docker lets you save a lot of time on the setup process, which in some cases can be very long and expensive, both in terms of time and in terms of dedicated staff (and consequently at the economic level).
  5. Docker saves you money. Time is money, and Docker helps reduce costs, not only for dedicated staff but also for infrastructure expenditure. With containers, unused memory and disk can be shared between instances.
  6. Docker has a large ecosystem of existing images. Two years ago there were already more than 14,000 public Docker images available online. The majority of them are shared through Docker Hub, the reference platform for anyone who works with public Docker images.
  7. Docker is multi-platform. Docker was born to manage Linux containers, but it is now possible to use it with other operating systems as well, using platform-specific solutions.
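
To see the layered file system from point 3 in practice, you can list the layers of any local image (a quick sketch, assuming you have pulled the official nginx image):

# Each line of output corresponds to one layer created by one build instruction.
$ docker history nginx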

Getting started with Docker Hub and using images from Docker Hub

Step-1: Sign up for a Docker account

Creating Docker ID: A Docker ID grants you access to Docker Hub repositories and allows you to explore images that are available from the community and verified publishers. You’ll also need a Docker ID to share images on Docker Hub.

Step-2: Create your first repository

To create a repository:

  1. Sign in to Docker Hub.

  2. Click Create a Repository on the Docker Hub welcome page.

  3. Name it my-private-repo.

  4. Set the visibility to Private.

  5. Click Create.

Step-3: Download and install Docker Desktop

  1. Download and install Docker Desktop. If on Linux, download Docker Engine.
  2. Sign into the Docker Desktop application using the Docker ID you created in Step 1.

Step-4: Build and push a container image to Docker Hub from your computer

  1. Start by creating a Dockerfile to specify your application as shown below:
# syntax=docker/dockerfile:1
FROM busybox
CMD echo "Hello world! This is my first Docker image."
  2. Run docker build -t <your_username>/my-private-repo . to build your Docker image.

  3. Run docker run <your_username>/my-private-repo to test your Docker image locally.

  4. Run docker push <your_username>/my-private-repo to push your Docker image to Docker Hub.

  5. Your repository in Docker Hub should now display a new latest tag under Tags.
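
Putting the steps together, the whole flow looks roughly like this (<your_username> stands for your Docker ID):

$ docker login                                        # authenticate with your Docker ID
$ docker build -t <your_username>/my-private-repo .   # build the image from the Dockerfile
$ docker run <your_username>/my-private-repo          # prints the hello message locally
$ docker push <your_username>/my-private-repo         # upload the image to your private repo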

Benefits of Docker

  1. Return on Investment and Cost Savings. Docker's first advantage is ROI. Especially for large, established companies, which need to generate steady revenue over the long term, a solution is only better if it can drive down costs while raising profits.
  2. Rapid Deployment. Docker can cut deployment time to seconds, because it creates a container for every process and does not boot an OS. Containers can be created and destroyed freely, without worrying that the cost of bringing one up again will be higher than what is affordable.
  3. Security. From a security point of view, Docker ensures that applications running in containers are completely segregated and isolated from each other, granting you complete control over traffic flow and management.
  4. Simplicity and Faster Configurations. The way Docker simplifies matters is one of its key benefits. It gives users the flexibility to take their own configuration, put it into code, and deploy it without any problems.

Limitations of Docker

  1. Missing features. A ton of feature requests are still in progress, such as container self-registration and self-inspection, copying files from the host to the container, and many more.
  2. Data in the container. When a container goes down, it needs a backup and recovery strategy. Although several solutions exist for this, they are not automated or not very scalable yet.
  3. Run applications as fast as a bare-metal server. In comparison with virtual machines, Docker containers have less overhead, but not zero overhead. If we run an application directly on a bare-metal server we get true bare-metal speed; containers don't run at bare-metal speeds.
  4. Provide cross-platform compatibility. One major issue is that an application designed to run in a Docker container on Windows can't run on Linux, and vice versa. Virtual machines are not subject to this limitation.

Top Alternatives to Docker Hub

  1. Amazon Elastic Container Registry (ECR)
  2. JFrog Artifactory
  3. Azure Container Registry
  4. Red Hat Quay
  5. Harbor

Docker

  • Open platform for developing, packaging, and running applications.
  • It helps in separating applications from the infrastructure so that software can be delivered quickly.
  • Basically, it reduces the delay between writing code and running it in production.

Features

  • It provides a loosely isolated environment called a container.
  • Isolation and security allow you to run many containers simultaneously on a given host.

What is a container?

Containers are lightweight environments that contain everything needed to run an application, so you don't rely on what is currently installed on the host. A container supports the components of the application we want to develop, and it becomes the unit for distributing and testing the application.
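
A quick way to see this self-containment (a sketch, assuming the official alpine image): the container reports its own OS files, not the host's:

# Prints Alpine's OS information regardless of which distribution the host runs.
$ docker run --rm alpine cat /etc/os-release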

Using Docker

  • Docker helps in the development life cycle by allowing developers to work in standardized environments, using local containers which provide applications and services.
  • Developers can write code locally and share their work with others using Docker containers.
  • You can push your application to a test environment and execute manual and automated tests.
  • Bugs that are found can be fixed in the development environment and redeployed to the test environment for testing and validation.
  • Docker containers can run on a developer’s local laptop, on physical or virtual machines in a data center, on cloud providers, or in a mixture of environments.
  • It can dynamically manage workloads, so it is preferred in high-density environments and also fits small and medium deployments where we require fewer resources.

Docker Architecture

  • It uses Client-Server Architecture Model.
  • The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers.
  • The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon.
  • The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
  • Another Docker client is Docker Compose, which lets you work with applications consisting of a set of containers.

If you don't know what a daemon is, think of it as a computer program that runs as a background process and is ready to perform an operation whenever required.

Let's understand the actual flow of instructions:

  • The Docker daemon listens for Docker API requests and manages Docker objects.
  • The Docker client is the way users interact with Docker. When you run docker commands, the client uses the Docker API and can communicate with more than one daemon (see the sketch after this list).
  • The Docker registry stores Docker images.
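
You can see the client/daemon split directly: every docker command is a REST call to whichever daemon the client points at. A small sketch (the ssh remote is a hypothetical host):

# Talk to the local daemon over its UNIX socket (the default).
$ docker ps

# Point the same client at a remote daemon over SSH instead.
$ docker -H ssh://user@remote-host ps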

Tech Stack

  • Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to deliver its functionality.
  • Docker uses a technology called namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container.
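
Namespace isolation is easy to observe: inside a container, the PID namespace exposes only the container's own processes (a sketch, assuming the official alpine image):

# Lists only the container's processes, not the host's: the PID namespace at work.
$ docker run --rm alpine ps aux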

Project Demo

Installation and Setup

  • First of all, we need to install Docker Engine. It is available for all operating systems.

Here we will be looking at Linux platforms (Ubuntu Focal 20.04, to be more specific). You can find instructions for your OS on the official site: https://docs.docker.com/engine/install/

Note: We are following the Ubuntu guide. Please refer to the official site if you have a different operating system.


Installation Method

You can install Docker Engine in different ways, depending on your needs:

  • Most users set up Docker’s repositories and install from them, for ease of installation and upgrade tasks. This is the recommended approach.

  • Some users download the DEB package, install it manually, and manage upgrades completely manually. This is useful in situations such as installing Docker on air-gapped systems with no access to the internet.

  • In testing and development environments, some users choose to use automated convenience scripts to install Docker.

Here we will follow the repository installation method.

  • Step 1 : Update the apt package index and install packages to allow apt to use a repository over HTTPS
$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

  • Step 2 : Add Docker’s official GPG key (use the key for your OS):
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
  • Step 3 : Use the following command to set up the stable repository.
$ echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  • Step 4 : Update the apt package index again and install Docker Engine from the repository you just set up.
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io

  • Step 5 : Run the following command and check that everything worked well with your installation.

$ sudo docker run hello-world

If the installation worked, the hello-world container prints a greeting confirming that your installation appears to be working correctly.

Let's build the project.

We will be making a simple Python web application running on Docker Compose, using the Flask framework with a hit counter implemented in Redis. The counter will increment every time you visit the page.

Step 1 : Setup

  1. Create a directory for the project.
$ mkdir GWOC_OS_Docker
$ cd GWOC_OS_Docker
  2. Create a file app.py and paste the code below into it.
import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)


@app.route('/')
def hello():
    count = get_hit_count()
    return 'Welcome to GWOC OpenSource Docker Learning! I have been seen {} times.\nContributed by Harshil-Jani'.format(count)

In this example, redis is the hostname of the redis container on the application’s network. We use the default port for Redis, 6379.

  3. Create another file called requirements.txt in your project directory and paste this in:
flask
redis

Step 2 : Create a Dockerfile

In this step, you write a Dockerfile that builds a Docker image. The image contains all the dependencies the Python application requires, including Python itself.

In your project directory, create a file named Dockerfile and paste the following:

# syntax=docker/dockerfile:1
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]

This tells Docker to:

  • Build an image starting with the Python 3.7 image.
  • Set the working directory to /code.
  • Set environment variables used by the flask command.
  • Install gcc and other dependencies.
  • Copy requirements.txt and install the Python dependencies.
  • Add metadata to the image to describe that the container is listening on port 5000.
  • Copy the current directory . in the project to the workdir . in the image.
  • Set the default command for the container to flask run.
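
If you want to try the image on its own before wiring up Compose, you can build and run it directly (a sketch; the tag flask-hit-counter is just an example name):

# Build the image from the Dockerfile in the project directory.
$ sudo docker build -t flask-hit-counter .

# Run it standalone. Without a Redis container on the same network,
# requests to / will fail; Compose wires Redis up in the next step.
$ sudo docker run --rm -p 5000:5000 flask-hit-counter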

Step 3 : Define services in a Compose File

Create a file called docker-compose.yml in your project directory and paste the following:

version: "3.3"
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"

This Compose file defines two services: web and redis.
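
Before starting the stack, you can ask Compose to validate the file and print the fully resolved configuration, which is a handy sanity check:

# Validates docker-compose.yml and prints the resolved configuration.
$ sudo docker-compose config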

Step 4 : Build and run your app with Compose

  1. From your project directory, start up your application by running docker-compose up (if Compose is not installed yet, install it first):
$ sudo apt install docker-compose
$ sudo docker-compose up
  2. Visit http://localhost:5000 in your browser, using the link given in the terminal window.

The initial page shows the welcome message with a hit count of 1.

Every time you refresh the page, the counter increases.

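You can also hit the counter from the terminal; each request increments the value stored in Redis:

# Each request returns the welcome message with an incremented count.
$ curl http://localhost:5000
$ curl http://localhost:5000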

Yay! If you followed the guide, then you have successfully deployed your first Docker project.